
feat(yamux): increase max number of buffered inbound streams to 256 #3872

Closed
wants to merge 4 commits

Conversation

thomaseizinger
Contributor

@thomaseizinger commented May 3, 2023

Description

As part of the effort to implement a consistent ACK backlog across all yamux implementations, it makes sense to raise this limit to 256 so that we drop streams as rarely as possible.

Notes & open questions

Related: paritytech/polkadot-sdk#758.

Change checklist

  • I have performed a self-review of my own code
  • I have made corresponding changes to the documentation
  • I have added tests that prove my fix is effective or that my feature works
  • A changelog entry has been made in the appropriate crates

@thomaseizinger thomaseizinger requested a review from mxinden May 3, 2023 07:13
@drHuangMHT
Contributor

Why can't the limit be configurable?

@thomaseizinger
Contributor Author

I'd rather not introduce too many tunables and config options.

In an ideal world, we migrate users to a muxer that supports backpressure on newly opened streams (QUIC or qmux). Then this config option simply doesn't exist.

In the absence of a backpressure mechanism, the best we can do is drop streams when the remote overloads us; otherwise we risk resource exhaustion.

Go has already implemented tracking of the ACK backlog, and we will eventually implement it too (libp2p/rust-yamux#150). Aligning the buffer size with the ACK backlog should place us in a sweet spot where we never need to drop a stream.

If we make this limit configurable, users will configure it and shoot themselves in the foot because it will likely not be synced with the remote's ACK backlog.

You are of course welcome to fork libp2p-yamux and do whatever you want :)
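The drop-on-overload behaviour described above can be sketched as a fixed-capacity buffer. This is a hypothetical, simplified model (the names `Stream` and `InboundStreamBuffer` are illustrative), not the actual libp2p-yamux implementation:

```rust
use std::collections::VecDeque;

/// Hypothetical stand-in for a yamux stream.
#[derive(Debug, PartialEq)]
struct Stream(u32);

/// The limit introduced by this PR.
const MAX_BUFFERED_INBOUND_STREAMS: usize = 256;

/// Without a backpressure mechanism, the only safe reaction to an
/// overloading remote is to drop new inbound streams once we are full.
struct InboundStreamBuffer {
    streams: VecDeque<Stream>,
}

impl InboundStreamBuffer {
    fn new() -> Self {
        Self {
            streams: VecDeque::new(),
        }
    }

    /// Buffer a new inbound stream, dropping it when at capacity.
    /// Returns `true` if the stream was buffered.
    fn push(&mut self, stream: Stream) -> bool {
        if self.streams.len() >= MAX_BUFFERED_INBOUND_STREAMS {
            drop(stream); // Overloaded: drop instead of exhausting memory.
            return false;
        }
        self.streams.push_back(stream);
        true
    }

    /// Hand the oldest buffered stream to the user, in FIFO order.
    fn pop(&mut self) -> Option<Stream> {
        self.streams.pop_front()
    }
}

fn main() {
    let mut buf = InboundStreamBuffer::new();
    // A remote opening 300 streams only gets the first 256 buffered.
    let buffered = (0..300).filter(|&id| buf.push(Stream(id))).count();
    assert_eq!(buffered, 256);
    assert_eq!(buf.pop(), Some(Stream(0)));
}
```

Keeping the capacity equal to the remote's ACK backlog (rather than user-configurable) is what makes the `push` overflow branch unreachable for well-behaved peers.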

@drHuangMHT
Contributor

You are of course welcome to fork libp2p-yamux and do whatever you want :)

Was just out of curiosity though.

///
/// Implementations are encouraged not to open more than 256 unacknowledged streams, see <https://github.com/libp2p/specs/tree/master/yamux#ack-backlog>.
/// Thus, it makes sense to buffer up to 256 streams before dropping them, for good interoperability.
const MAX_BUFFERED_INBOUND_STREAMS: usize = 256;
Member


I am not sure I follow the reasoning above. At the point where a new stream is retrieved through this.poll_inner, the stream will be acknowledged to the remote, thus allowing the remote to open another new stream.

As far as I can tell, bumping this limit just delays running into the limit itself, no?

Contributor Author


Thanks for questioning this!

At the point where a new stream is retrieved through this.poll_inner, the stream will be acknowledged to the remote

I don't think this is true. We emit streams in rust-yamux without acknowledging them immediately because we don't want to send a frame whose only purpose is acknowledging a stream. I'll have to dig into the details, but I think we delay the acknowledgement until we send the first payload on the stream.

The first payload will be multistream-select, which in our case happens directly after we remove a stream from this buffer. That means that if our Connection is slow to remove streams from this buffer, the remote should eventually stop opening new ones.

I think it makes sense to write a test against this just to be sure, but for that we need to land the ACK backlog changes first: libp2p/rust-yamux#150
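The ACK backlog rule referenced here can be modelled in a few lines. This is a toy model (the `Opener` type and its methods are invented for illustration, not the rust-yamux API) of why a remote that tracks its backlog eventually stops opening streams when we are slow to acknowledge:

```rust
/// Toy model of the ACK backlog rule from the yamux spec: a peer should
/// not have more than this many streams open that the remote has not yet
/// acknowledged.
const ACK_BACKLOG: usize = 256;

/// Models the stream-opening side of a connection.
struct Opener {
    unacknowledged: usize,
}

impl Opener {
    fn new() -> Self {
        Self { unacknowledged: 0 }
    }

    /// Try to open a new outbound stream; refuse while the backlog is full.
    fn try_open(&mut self) -> bool {
        if self.unacknowledged >= ACK_BACKLOG {
            return false;
        }
        self.unacknowledged += 1;
        true
    }

    /// The remote acknowledged one of our streams, freeing a slot.
    fn on_ack(&mut self) {
        self.unacknowledged = self.unacknowledged.saturating_sub(1);
    }
}

fn main() {
    let mut opener = Opener::new();
    // We can open exactly ACK_BACKLOG streams before being blocked...
    assert!((0..ACK_BACKLOG).all(|_| opener.try_open()));
    assert!(!opener.try_open());
    // ...and each acknowledgement frees one slot again.
    opener.on_ack();
    assert!(opener.try_open());
}
```

With the inbound buffer sized to the same constant, every stream the remote is permitted to open has a buffer slot waiting for it, which is the "never need to drop a stream" sweet spot described above.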

Contributor Author


If the WindowUpdateMode is set to OnRead (which is the default), then we are not sending a Frame on a newly opened inbound stream until the user first uses it. To acknowledge the stream in that case, we preemptively set the ACK flag:

https://github.com/libp2p/rust-yamux/blob/72ccd5734f71971fc1f02ac4a5f43b8eb4dff660/yamux/src/connection.rs#L656

Let me know if you see anything wrong with the reasoning here.
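The piggybacked acknowledgement described here can be sketched as follows. This is a simplified illustration (the `StreamState` and `FrameHeader` types are hypothetical), not the actual rust-yamux code linked above:

```rust
/// Simplified per-stream state: whether our side still owes the remote an
/// ACK for this inbound stream.
struct StreamState {
    ack_pending: bool,
}

/// Hypothetical frame header; `ack` corresponds to yamux's ACK flag.
#[derive(Debug, PartialEq)]
struct FrameHeader {
    ack: bool,
}

/// Before sending any frame on a stream, piggyback the ACK flag if the
/// stream has not been acknowledged yet, instead of sending a dedicated
/// acknowledgement frame.
fn prepare_header(state: &mut StreamState) -> FrameHeader {
    let ack = state.ack_pending;
    state.ack_pending = false; // An ACK is only ever sent once.
    FrameHeader { ack }
}

fn main() {
    let mut state = StreamState { ack_pending: true };
    // The first frame we send on the stream carries the ACK flag...
    assert_eq!(prepare_header(&mut state), FrameHeader { ack: true });
    // ...subsequent frames do not.
    assert_eq!(prepare_header(&mut state), FrameHeader { ack: false });
}
```

This is why the acknowledgement can be delayed until first use: the ACK travels for free on whatever frame the user's protocol (e.g. multistream-select) sends first.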

Contributor Author


I realized I still had a fully-working branch for the ACK backlog locally so I just quickly added a test for this as well and it confirmed my intuition: libp2p/rust-yamux#153

@mergify
Contributor

mergify bot commented May 9, 2023

This pull request has merge conflicts. Could you please resolve them @thomaseizinger? 🙏

@mxinden
Member

mxinden commented May 26, 2023

Instead of continuing here, i.e. buffering in libp2p-yamux, how about investing in buffering in yamux right away, i.e. doing libp2p/rust-yamux#154?

@thomaseizinger
Contributor Author

Instead of continuing here, i.e. buffering in libp2p-yamux, how about investing in buffering in yamux right away, i.e. doing libp2p/rust-yamux#154?

Yes, happy to do that. When I opened this, yamux wasn't as far along yet, and I had forgotten that I still had the backlog branch locally 😁
